


f7ac67a9aa8d255282de7d11391e1b69-AuthorFeedback.pdf

Neural Information Processing Systems

In the main objective, the program optimizes for Λ based on the supervised loss of the "validation" set. SSL typically uses an 'unsupervised loss' to leverage unlabeled data. While the model may not generalize if the unsupervised loss is poorly designed, recent works [38, 36] empirically validate their proposed loss. Theoretical analysis of SSL has also been provided under various assumptions, e.g., [6, A]. We encourage R1 to study these works, which show how unsupervised losses aid generalization.
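
For reference, the bi-level program this feedback refers to can be written as follows. This is a sketch of our reading of the abstract, with Λ = {λ_i} collecting the per-example weights; the loss symbols and dataset names are illustrative, not the paper's notation:

\min_{\Lambda} \; \mathcal{L}_{\mathrm{sup}}\bigl(\theta^{*}(\Lambda);\, \mathcal{D}_{\mathrm{val}}\bigr)
\qquad \text{s.t.} \qquad
\theta^{*}(\Lambda) \;=\; \arg\min_{\theta} \; \mathcal{L}_{\mathrm{sup}}(\theta;\, \mathcal{D}_{\mathrm{lab}}) \;+\; \sum_{i} \lambda_{i}\, \ell_{\mathrm{unsup}}(x_{i};\, \theta)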


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

Q2: Please summarize your review in 1-2 sentences
The paper proposes a modified SVM learning algorithm in which the loss function incorporates a per-example weight. However, example-dependent costs are already widely used in machine learning.
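
For context, "example-dependent costs" refers to objectives such as the standard cost-sensitive SVM, sketched below, where each example i carries its own misclassification cost c_i (the notation is ours, not taken from the reviewed paper):

\min_{w,\, b} \;\; \tfrac{1}{2}\lVert w \rVert^{2} \;+\; C \sum_{i} c_{i}\, \max\bigl(0,\; 1 - y_{i}(w^{\top} x_{i} + b)\bigr)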





Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning

Ren, Zhongzheng, Yeh, Raymond A., Schwing, Alexander G.

arXiv.org Machine Learning

Existing semi-supervised learning (SSL) algorithms use a single weight to balance the loss of labeled and unlabeled examples, i.e., all unlabeled examples are equally weighted. But not all unlabeled data are equal. In this paper we study how to use a different weight for every unlabeled example. Manual tuning of all those weights -- as done in prior work -- is no longer possible. Instead, we adjust those weights via an algorithm based on the influence function, a measure of a model's dependency on one training example. To make the approach efficient, we propose a fast and effective approximation of the influence function. We demonstrate that this technique outperforms state-of-the-art methods on semi-supervised image and language classification tasks.
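
To make the weighting scheme concrete, below is a minimal sketch of per-example weighting in SSL. It is not the authors' algorithm: the paper's influence-function approximation is replaced here by a cheap one-step unrolled gradient through a lookahead model update, and the linear model, pseudo-label surrogate loss, data shapes, and learning rates are all illustrative assumptions.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data (illustrative shapes): 20 labeled, 100 unlabeled, 20 validation examples.
x_lab, y_lab = torch.randn(20, 10), torch.randint(0, 2, (20,))
x_unl = torch.randn(100, 10)
x_val, y_val = torch.randn(20, 10), torch.randint(0, 2, (20,))

w = (0.1 * torch.randn(10, 2)).requires_grad_()   # linear classifier parameters
lam = torch.zeros(100, requires_grad=True)        # one weight logit per unlabeled example
inner_lr, outer_lr = 0.1, 0.1

def weighted_loss(weights):
    # Supervised cross-entropy plus per-example-weighted pseudo-label loss.
    logits_unl = x_unl @ w
    pseudo = logits_unl.detach().argmax(dim=1)    # pseudo-labels as the SSL surrogate
    unl = F.cross_entropy(logits_unl, pseudo, reduction="none")
    sup = F.cross_entropy(x_lab @ w, y_lab)
    return sup + (weights * unl).mean()

for step in range(200):
    # Outer step: nudge each lambda_i so that one inner SGD step on the
    # weighted loss lowers the validation loss (one-step unrolling).
    loss = weighted_loss(torch.sigmoid(lam))
    grad_w, = torch.autograd.grad(loss, w, create_graph=True)
    val_loss = F.cross_entropy(x_val @ (w - inner_lr * grad_w), y_val)
    grad_lam, = torch.autograd.grad(val_loss, lam)
    with torch.no_grad():
        lam -= outer_lr * grad_lam

    # Inner step: plain SGD on the model, with the weights held fixed.
    loss = weighted_loss(torch.sigmoid(lam).detach())
    grad_w, = torch.autograd.grad(loss, w)
    with torch.no_grad():
        w -= inner_lr * grad_w

print("validation loss:", F.cross_entropy(x_val @ w, y_val).item())

The sigmoid keeps every weight in (0, 1). Per the abstract, the paper instead estimates the gradient of the validation loss with respect to each weight via the influence function, using a fast approximation to keep the computation tractable; the one-step unrolling above is only a simple stand-in for that estimate.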